The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about common practices and the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a considerable portion of participants (32%) stated that they did not have enough time for it, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on either multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
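To make the most common strategy concrete, below is a minimal sketch of patch-based training as the respondents describe it: random fixed-size crops are drawn from a volume too large to process at once. The NumPy-based helper, patch size, and sampling scheme are illustrative assumptions, not taken from any particular submission.

```python
import numpy as np

def sample_patches(volume, patch_size=(64, 64, 64), n_patches=8, rng=None):
    """Randomly crop fixed-size patches from a 3D volume that is too large
    to be processed at once, yielding a training batch."""
    rng = rng if rng is not None else np.random.default_rng()
    patches = []
    for _ in range(n_patches):
        start = [rng.integers(0, s - p + 1) for s, p in zip(volume.shape, patch_size)]
        window = tuple(slice(st, st + p) for st, p in zip(start, patch_size))
        patches.append(volume[window])
    return np.stack(patches)

# Example with a synthetic 3D scan; real pipelines would sample from disk lazily.
scan = np.random.rand(256, 256, 180).astype(np.float32)
batch = sample_patches(scan)  # shape: (8, 64, 64, 64)
```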
The automation of an increasingly large number of software engineering tasks is becoming possible thanks to Machine Learning (ML). One foundational building block in the application of ML to software artifacts is the representation of these artifacts (e.g., source code or executable code) in a form that is suitable for learning. Many studies have leveraged representation learning, delegating to ML itself the job of automatically devising suitable representations. Yet, in the context of Android problems, existing models are either limited to coarse-grained, whole-app-level representations (e.g., apk2vec) or built for one specific downstream task (e.g., smali2vec). Our work is part of a new line of research that investigates effective, task-agnostic, and fine-grained universal representations of bytecode to mitigate both of these limitations. Such representations aim to capture information relevant to various low-level downstream tasks (e.g., at the class level). We are inspired by the field of Natural Language Processing, where the problem of universal representation was addressed by building Universal Language Models, such as BERT, whose goal is to capture abstract semantic information about sentences in a way that is reusable for a variety of tasks. We propose DexBERT, a BERT-like Language Model dedicated to representing chunks of DEX bytecode, the main binary format used in Android applications. We empirically assess whether DexBERT is able to model the DEX language and evaluate the suitability of our model in two distinct class-level software engineering tasks: Malicious Code Localization and Defect Prediction. We also experiment with strategies to deal with apps having vastly different sizes, and we demonstrate one example of using our technique to investigate what information is relevant to a given task.
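To illustrate how a BERT-like encoder can yield class-level embeddings from instruction sequences of arbitrary length, here is a sketch that chunks a sequence, encodes each chunk, and mean-pools the [CLS] vectors. It uses the generic `bert-base-uncased` checkpoint as a stand-in; the actual DexBERT tokenizer, checkpoint, and aggregation strategy are assumptions here.

```python
import torch
from transformers import BertModel, BertTokenizerFast

# Stand-in encoder; the real DexBERT vocabulary and weights would differ.
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
encoder = BertModel.from_pretrained("bert-base-uncased")
encoder.eval()

def embed_class(instructions, chunk_len=128):
    """Split a (possibly very long) instruction sequence into chunks, encode
    each chunk, and mean-pool the [CLS] vectors into one class-level embedding."""
    chunks = [" ".join(instructions[i:i + chunk_len])
              for i in range(0, len(instructions), chunk_len)]
    cls_vectors = []
    with torch.no_grad():
        for chunk in chunks:
            inputs = tokenizer(chunk, truncation=True, max_length=512,
                               return_tensors="pt")
            cls_vectors.append(encoder(**inputs).last_hidden_state[:, 0])
    return torch.cat(cls_vectors).mean(dim=0)  # one vector per class

embedding = embed_class(["invoke-virtual", "move-result-object", "const-string"] * 100)
```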
Open Information Extraction (OpenIE) aims to extract relational tuples from open-domain sentences. Traditional rule-based or statistical models have been developed based on the syntactic structures of sentences, as identified by syntactic parsers. However, previous neural OpenIE models under-explore this useful syntactic information. In this paper, we model both constituency and dependency trees as word-level graphs, enabling neural OpenIE to learn from syntactic structures. To better fuse the heterogeneous information from the two graphs, we adopt multi-view learning to capture multiple relationships from them. Finally, the finetuned constituency and dependency representations are aggregated with sentential semantic representations for tuple generation. Experiments show that both the constituency and dependency information and the multi-view learning are effective.
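As one plausible way to build the kind of word-level graph described above, the sketch below derives nodes and edges from a dependency parse using spaCy; the paper's actual graph construction, including its constituency-tree counterpart, may differ.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # requires: python -m spacy download en_core_web_sm

def dependency_word_graph(sentence):
    """Build a word-level graph from a dependency parse: one node per token,
    one labeled edge between each token and its syntactic head."""
    doc = nlp(sentence)
    nodes = [tok.text for tok in doc]
    edges = [(tok.i, tok.head.i, tok.dep_)
             for tok in doc if tok.i != tok.head.i]  # skip the root self-loop
    return nodes, edges

nodes, edges = dependency_word_graph("Neural models extract relational tuples from sentences.")
```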
We propose a domain adaptation method, MoDA, which adapts a pretrained embodied agent to a new, noisy environment without ground-truth supervision. Map-based memory provides important contextual information for visual navigation and exhibits a unique spatial structure mainly composed of flat walls and rectangular obstacles. Our adaptation approach leverages the inherent regularities of the estimated maps to guide the agent in overcoming the prevalent domain discrepancy in a novel environment. Specifically, we propose an efficient learning curriculum that handles the visual and dynamics corruptions in an online manner, self-supervised with pseudo clean maps generated by style transfer networks. Because the map-based representation provides spatial knowledge for the agent's policy, our formulation allows policy networks pretrained in simulators to be deployed in a new setting. We evaluate MoDA in various practical scenarios and show that our proposed method quickly enhances the agent's performance in downstream tasks including localization, mapping, exploration, and point-goal navigation.
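A heavily simplified sketch of the self-supervision idea, assuming a frozen style-transfer network that maps a noisy estimated map to a pseudo clean target; the module names, loss, and update scheme are illustrative, not MoDA's exact formulation.

```python
import torch
import torch.nn.functional as F

def adaptation_step(map_encoder, style_transfer, noisy_obs, optimizer):
    """One online self-supervised update: the frozen style-transfer network
    turns the current noisy map estimate into a pseudo clean target, which
    then supervises the map encoder."""
    estimated_map = map_encoder(noisy_obs)
    with torch.no_grad():
        pseudo_clean = style_transfer(estimated_map)  # pseudo ground truth
    loss = F.l1_loss(estimated_map, pseudo_clean)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```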
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
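Because the models are openly released, they can be loaded through the Hugging Face `transformers` library. A minimal usage example with the smaller released `bigscience/bloom-560m` variant follows (the full 176B checkpoint requires multi-GPU hardware):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Smaller released variant used for illustration; swap in "bigscience/bloom"
# for the full 176B model if you have the hardware for it.
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("Translate to French: Hello, world.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```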
Video super-resolution is one of the most popular tasks on mobile devices, being widely used for the automatic enhancement of low-bitrate and low-resolution video streams. While numerous solutions have been proposed for this problem, they are usually quite computationally demanding, demonstrating low FPS rates and poor power efficiency on mobile devices. In this Mobile AI challenge, we address this problem by tasking the participants with designing an end-to-end real-time video super-resolution solution for mobile NPUs optimized for low energy consumption. The participants were provided with the REDS training dataset containing video sequences for a 4X video upscaling task. The runtime and power efficiency of all models was evaluated on the powerful MediaTek Dimensity 9000 platform with a dedicated AI processing unit capable of accelerating floating-point and quantized neural networks. All proposed solutions are fully compatible with the above NPU, achieving frame rates of up to 500 FPS and a power consumption of 0.2 [Watt / 30 FPS]. A detailed description of all models developed in the challenge is provided in this paper.
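For intuition, here is a deliberately tiny per-frame upscaler of the kind mobile-NPU entries tend to favor: a few convolutions followed by a depth-to-space (pixel shuffle) layer. This is an illustrative toy in Keras with assumed layer sizes, not an actual challenge submission.

```python
import tensorflow as tf

def tiny_upscaler(scale=4, channels=3):
    """A minimal NPU-friendly upscaler: shallow convolutions produce
    channels * scale^2 feature maps, which depth-to-space rearranges
    into a scale-times-larger image."""
    inp = tf.keras.Input(shape=(None, None, channels))
    x = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    x = tf.keras.layers.Conv2D(channels * scale ** 2, 3, padding="same")(x)
    out = tf.keras.layers.Lambda(lambda t: tf.nn.depth_to_space(t, scale))(x)
    return tf.keras.Model(inp, out)

model = tiny_upscaler()  # (H, W, 3) -> (4H, 4W, 3), one frame at a time
```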
Image super-resolution is a common task on mobile and IoT devices, where one often needs to upscale and enhance low-resolution images and video frames. While numerous solutions have been proposed for this problem in the past, they are usually not compatible with low-power mobile NPUs that have many computational and memory constraints. In this Mobile AI challenge, we address this problem by tasking the participants with designing an efficient quantized image super-resolution solution that can demonstrate real-time performance on mobile NPUs. The participants were provided with the DIV2K dataset and trained INT8 models to perform high-quality 3X image upscaling. The runtime of all models was evaluated on the Synaptics VS680 Smart Home board with a dedicated edge NPU capable of accelerating quantized neural networks. All proposed solutions are fully compatible with the above NPU, achieving frame rates of up to 60 FPS when reconstructing Full HD resolution images. A detailed description of all models developed in the challenge is provided in this paper.
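A sketch of how such an INT8 model can be produced with TensorFlow Lite post-training quantization, the usual toolchain for this class of edge NPU. The toy model and random representative data are stand-ins for a trained SR network and real low-resolution crops.

```python
import numpy as np
import tensorflow as tf

# Toy 3X upscaler standing in for a trained super-resolution network.
inp = tf.keras.Input(shape=(120, 120, 3))
x = tf.keras.layers.Conv2D(27, 3, padding="same")(inp)  # 3 channels * 3^2
out = tf.keras.layers.Lambda(lambda t: tf.nn.depth_to_space(t, 3))(x)
model = tf.keras.Model(inp, out)

def representative_data():
    """Calibration samples for quantization; real pipelines would yield
    actual low-resolution crops from the training set."""
    for _ in range(100):
        yield [np.random.rand(1, 120, 120, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("sr_int8.tflite", "wb") as f:
    f.write(converter.convert())
```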
Although graph neural networks (GNNs) have been successfully applied to node classification and link prediction tasks in graphs, learning graph-level representations remains a challenge. For graph-level representations, it is important to learn both the representations of neighboring nodes, i.e., aggregation, and graph structural information. Many graph pooling methods have been developed for this goal. However, most existing pooling methods use k-hop neighborhoods without considering explicit structural information in a graph. In this paper, we propose Structural Prototype Guided Pooling (SPGP), which uses prior graph structures to overcome this limitation. SPGP formulates graph structures as learnable prototype vectors and computes the affinity between nodes and prototype vectors. This leads to a novel node scoring scheme that prioritizes informative nodes while encapsulating the useful structures of the graph. Our experimental results show that SPGP outperforms state-of-the-art graph pooling methods on graph classification benchmark datasets in both accuracy and scalability.
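A minimal PyTorch sketch of the scoring idea: learnable prototype vectors, cosine affinity between node features and prototypes, and the maximum affinity as the node score. SPGP's actual affinity computation and scoring scheme may differ; this is an assumed simplification.

```python
import torch
import torch.nn.functional as F

class PrototypeScoring(torch.nn.Module):
    """Score nodes by their cosine affinity to learnable structure prototypes."""
    def __init__(self, feat_dim, n_prototypes=8):
        super().__init__()
        self.prototypes = torch.nn.Parameter(torch.randn(n_prototypes, feat_dim))

    def forward(self, node_feats):  # node_feats: (N, feat_dim)
        affinity = (F.normalize(node_feats, dim=-1)
                    @ F.normalize(self.prototypes, dim=-1).T)  # (N, K)
        return affinity.max(dim=-1).values  # (N,) one score per node

scores = PrototypeScoring(feat_dim=64)(torch.randn(30, 64))
# Pooling would then keep the top-scoring nodes, e.g. scores.topk(10).indices
```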
This paper introduces a data-driven shape completion approach that focuses on completing the geometric details of missing regions of 3D shapes. We observe that existing generative methods lack the training data and representation capacity to synthesize plausible, fine-grained details with complex geometry and topology. Our key insight is to copy and deform patches from the partial input to complete missing regions. This enables us to preserve the style of local geometric features, even when it differs substantially from the training data. Our fully automatic approach proceeds in two stages. First, we learn to retrieve candidate patches from the input shape. Second, we select and deform some of the retrieved candidates to seamlessly blend them into the complete shape. This method combines the advantages of the two most common completion approaches: similarity-based single-instance completion, and completion by learning a shape space. We leverage repeating patterns by retrieving patches from the partial input, and we learn global structural priors by using a neural network to guide the retrieval and deformation steps. Experimental results show that our approach considerably outperforms baselines across multiple datasets and shape classes. Code and data are available at https://github.com/gitbosun/patchrd.
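A toy sketch of the retrieval stage, assuming patch descriptors have already been extracted as feature vectors; nearest neighbors in feature space serve as completion candidates. This is illustrative only, as the paper's learned retrieval and deformation are more involved.

```python
import torch

def retrieve_candidates(query_feats, partial_patch_feats, k=5):
    """For each query location in the missing region, return the indices of
    the k most similar patches from the partial input (feature-space L2)."""
    dists = torch.cdist(query_feats, partial_patch_feats)  # (Q, P)
    return dists.topk(k, largest=False).indices            # (Q, k)

candidate_ids = retrieve_candidates(torch.randn(10, 128), torch.randn(200, 128))
```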
As humans, we can modify our assumptions about a scene by imagining alternative objects or concepts in our minds. For example, we can easily anticipate the implications of rain clouds (e.g., the street will get wet) and prepare accordingly. In this paper, we introduce a new task/dataset called Counterfactual Scene Imagination (CoSIm), designed to evaluate the ability of AI systems to reason about scene-change imagination. In this task/dataset, a model is given an image and an initial question-response pair. Next, a counterfactual imagined scene change is applied (in textual form), and the model has to predict a new response to the initial question based on this scene change. We collect 3.5K high-quality and challenging data instances, each consisting of an image, a commonsense question with a response, a description of a counterfactual change, a new response to the question, and three distractor responses. Our dataset contains various complex scene-change types (e.g., object addition/removal/state change, event description, environment change, etc.) that require models to imagine and reason over many different scenes. We present a baseline model based on a vision-language Transformer (i.e., LXMERT) and ablation studies. Through human evaluation, we demonstrate a large human-model performance gap, leaving room for promising future work on this challenging counterfactual scene imagination task. Our code and dataset are publicly available at https://github.com/hyounghk/cosim.
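For concreteness, a hypothetical Python representation of one data instance, mirroring the fields enumerated above; the field names are illustrative, not the dataset's exact schema.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CoSImInstance:
    """One CoSIm example: an image, an initial QA pair, a textual
    counterfactual change, the new answer, and three distractors."""
    image_path: str
    question: str
    initial_answer: str
    counterfactual_change: str  # e.g., "the sun is covered by rain clouds"
    new_answer: str
    distractors: List[str]      # three incorrect candidate responses
```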